Sustained high blood glucose levels in type 2 diabetes mellitus (T2DM) can have catastrophic long-term health consequences. An essential component of clinical interventions for T2DM is monitoring dietary intake to keep plasma glucose levels within an acceptable range. However, current techniques for monitoring food intake are time-intensive and error-prone. To address this, we are developing techniques to automatically monitor food intake and the composition of those foods using continuous glucose monitors (CGMs). This paper presents the results of a clinical study in which participants consumed nine standardized meals with known macronutrient amounts (carbohydrate, protein, and fat) while wearing a CGM. We built a multitask neural network to estimate macronutrient composition from the CGM signal and compared it against a baseline linear regression. The best predictions came from our proposed neural network trained on subject-dependent data, as measured by root-mean-squared relative error and correlation coefficient. These findings indicate that macronutrient composition can be estimated from CGM signals, opening the possibility of developing automated techniques for tracking food intake.
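The abstract does not include code; the sketch below is a minimal illustration of what a multi-task regression network over a CGM window might look like, assuming PyTorch. All layer sizes, the window length, and the task names are placeholders, not the authors' architecture.

```python
# Hypothetical multi-task network mapping a CGM window to three macronutrient
# estimates (carbohydrate, protein, fat). Shapes are illustrative only.
import torch
import torch.nn as nn

class MacronutrientNet(nn.Module):
    def __init__(self, window_len=120, hidden=64):
        super().__init__()
        # Shared encoder over the 1-D CGM signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, hidden), nn.ReLU(),
        )
        # One regression head per macronutrient (the multi-task part).
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, 1) for name in ("carb", "protein", "fat")
        })

    def forward(self, x):                       # x: (batch, 1, window_len)
        z = self.encoder(x)
        return {name: head(z).squeeze(-1) for name, head in self.heads.items()}

model = MacronutrientNet()
cgm = torch.randn(8, 1, 120)                    # fake batch of CGM windows
preds = model(cgm)                              # dict of per-task predictions
loss = sum(nn.functional.mse_loss(preds[k], torch.randn(8)) for k in preds)
```

A subject-dependent variant, as evaluated in the study, would train and test such a model on data from the same participant.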
Electronic health record (EHR) systems provide critical, rich, and valuable information at high frequency. One of the most exciting applications of EHR data is the development of a real-time mortality warning system using tools from survival analysis. However, most survival analysis methods used to date rely on (semi)parametric models with static covariates. These models do not exploit the information conveyed by time-varying EHR data. In this work, we demonstrate the application of a highly scalable survival analysis method, BoXHED 2.0, to a real-time ICU mortality warning indicator based on the MIMIC IV dataset. Importantly, BoXHED can incorporate time-dependent covariates in a fully nonparametric manner and is backed by theory. Our in-ICU mortality model achieves an out-of-sample AUC-PRC of 0.41 and AUC-ROC of 0.83, demonstrating the benefits of real-time monitoring.
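BoXHED's own API is not reproduced here; as a rough illustration of the data shape that time-varying survival methods consume, the sketch below arranges raw EHR measurements into start/stop epochs with an event indicator, using pandas and entirely hypothetical column names and values.

```python
# Illustrative start/stop ("counting process") layout for time-varying
# covariates; columns and numbers are invented, not MIMIC IV or BoXHED specific.
import pandas as pd

raw = pd.DataFrame({
    "subject_id": [1, 1, 1, 2, 2],
    "t_measured": [0.0, 4.0, 9.0, 0.0, 6.0],       # hours since ICU admission
    "heart_rate": [92, 110, 128, 75, 80],
    "lactate":    [1.1, 2.3, 4.0, 0.9, 1.0],
    "death_time": [10.0, 10.0, 10.0, None, None],  # None = censored
})

def to_epochs(g):
    g = g.sort_values("t_measured")
    end = g["t_measured"].shift(-1)
    died = pd.notna(g["death_time"].iloc[0])
    # For censored subjects, a placeholder end-of-observation time is used.
    last_stop = g["death_time"].iloc[0] if died else g["t_measured"].max() + 1.0
    return g.assign(
        t_start=g["t_measured"],
        t_stop=end.fillna(last_stop),
        event=[0] * (len(g) - 1) + [int(died)],
    )[["subject_id", "t_start", "t_stop", "heart_rate", "lactate", "event"]]

epochs = pd.concat([to_epochs(g) for _, g in raw.groupby("subject_id")],
                   ignore_index=True)
print(epochs)   # each row: one epoch during which covariates are held constant
```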
Given the difficulty of obtaining high-quality labels for medical image recognition tasks, there is a need for deep learning techniques that can be fine-tuned effectively on small labeled datasets. Recent advances in self-supervised learning have shown that such in-domain representation learning can provide a strong initialization for supervised fine-tuning and is more data-efficient than standard transfer learning from supervised pretraining tasks. However, these approaches have not been tailored to medical diagnosis from data captured in video format. With this progress in mind, we developed a self-supervised learning approach catered to echocardiogram videos, with the goal of learning strong representations for the downstream task of diagnosing aortic stenosis (AS), a common and dangerous disease of the aortic valve. When fine-tuned on 1% of the training data, our best self-supervised model achieves 0.818 AUC (95% CI: 0.794, 0.840), while the standard transfer learning approach reaches 0.644 AUC (95% CI: 0.610, 0.677). We also find that, when predicting severe AS, our self-supervised model attends to regions relevant to disease severity, as demonstrated by saliency map visualizations.
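As a rough sketch of what self-supervised pretraining on unlabeled video clips can look like, the following uses a SimCLR-style contrastive objective in PyTorch. The tiny 3-D CNN, the augmentations, and the clip shapes are stand-ins chosen for brevity, not the authors' method.

```python
# SimCLR-style contrastive pretraining over video clips; all components are
# illustrative placeholders, not the paper's architecture or augmentations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClipEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(                 # tiny 3-D CNN stand-in
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(32, dim)

    def forward(self, clip):                           # clip: (B, 1, T, H, W)
        return F.normalize(self.proj(self.backbone(clip)), dim=-1)

def nt_xent(z1, z2, tau=0.1):
    """InfoNCE loss over two augmented views of the same batch of clips."""
    z = torch.cat([z1, z2], dim=0)                     # (2B, dim)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))                  # exclude self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

encoder = ClipEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
clips = torch.randn(4, 1, 16, 64, 64)                  # fake unlabeled echo clips
view1 = clips + 0.05 * torch.randn_like(clips)         # toy augmentations
view2 = clips.flip(-1)
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward(); opt.step()
```

After pretraining, the encoder would be fine-tuned on the small labeled subset (here, 1% of the training data) for the AS classification task.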
Because most admitted patients survive, medical events of interest such as mortality typically occur at a low rate. Training models under such imbalance (a class density discrepancy) can lead to suboptimal predictions. This problem has traditionally been addressed with ad hoc methods such as resampling or reweighting, but performance in many settings remains limited. We propose a framework for training models under this imbalance: 1) we first decouple the feature extraction and classification processes, adjusting the training batches for each component separately to mitigate the bias caused by the class density discrepancy; 2) we train with both a density-aware loss and a learnable cost matrix for misclassifications. We demonstrate the model's improved performance on real-world medical datasets (TOPCAT and MIMIC-III), showing gains in AUC-ROC, AUC-PRC, and Brier Skill Score compared with baselines in the domain.
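The sketch below illustrates the two ingredients named above with made-up details: a loss reweighted by inverse class density and a learnable, always-positive cost matrix that scales the penalty of each (true, predicted) class pair. It is not the paper's formulation.

```python
# Illustrative density-aware loss with a learnable misclassification cost matrix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DensityAwareCostLoss(nn.Module):
    def __init__(self, class_counts):
        super().__init__()
        density = torch.tensor(class_counts, dtype=torch.float)
        # Inverse-density ("balanced") class weights.
        self.register_buffer("class_weight", density.sum() / (len(density) * density))
        # Learnable cost matrix; softplus keeps entries positive.
        n = len(class_counts)
        self.raw_cost = nn.Parameter(torch.zeros(n, n))

    def forward(self, logits, target):
        ce = F.cross_entropy(logits, target, weight=self.class_weight,
                             reduction="none")
        cost = F.softplus(self.raw_cost)                  # (n, n) positive costs
        probs = F.softmax(logits, dim=-1)                 # (B, n)
        # Expected misclassification cost given each example's true class.
        expected_cost = (probs * cost[target]).sum(dim=-1)
        return (ce + expected_cost).mean()

criterion = DensityAwareCostLoss(class_counts=[950, 50])  # imbalanced toy counts
logits = torch.randn(8, 2, requires_grad=True)
target = torch.randint(0, 2, (8,))
loss = criterion(logits, target)
loss.backward()                                           # grads reach raw_cost too
```

In practice a jointly learned cost matrix needs some constraint or regularizer (for example, a fixed diagonal or a lower bound) to keep it from collapsing toward zero; the abstract does not specify how the authors handle this.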
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
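A minimal sketch of the kind of prompt-and-parse relevance query described above, assuming the legacy openai Python SDK (pre-1.0) that served text-davinci-003 through the Completion endpoint. The prompt wording, JSON fields, and example inputs are invented for illustration and are not the paper's prompts.

```python
# Query a completion model for bill relevance to a company (illustrative only).
import json
import openai

openai.api_key = "YOUR_KEY"

PROMPT = """You are assessing corporate lobbying relevance.
Company: {company}
Bill summary: {bill}
Answer in JSON with keys "relevant" (YES/NO), "confidence" (0-100), "explanation"."""

def assess_relevance(company, bill_summary):
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT.format(company=company, bill=bill_summary),
        max_tokens=256,
        temperature=0,
    )
    # Assumes the model returns well-formed JSON; real pipelines need fallbacks.
    return json.loads(resp["choices"][0]["text"].strip())

result = assess_relevance(
    "ExampleCorp Inc.",   # placeholder company, not from the paper
    "A bill to amend reporting requirements for publicly traded companies.",
)
print(result["relevant"], result["confidence"])
```

A second model call, conditioned on a "relevant" verdict, would then draft the letter to the bill's sponsor.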
In recent years, deep learning has seen increasing use in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
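As a concrete illustration of one of the compared techniques, the sketch below shows Monte-Carlo Dropout used to score tile-level uncertainty and reject the most uncertain tiles. The classifier, dropout rate, and rejection threshold are placeholders, not the benchmarked models from this evaluation.

```python
# Monte-Carlo Dropout for tile uncertainty with rejection (illustrative).
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.5),                          # kept active at test time
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, tiles, n_samples=20):
    model.train()                                       # keep dropout stochastic
    probs = torch.stack([
        torch.softmax(model(tiles), dim=-1) for _ in range(n_samples)
    ])                                                  # (n_samples, B, C)
    mean = probs.mean(0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)  # predictive entropy
    return mean, entropy

model = TileClassifier()
tiles = torch.randn(16, 3, 96, 96)                      # fake H&E tiles
mean_probs, uncertainty = mc_dropout_predict(model, tiles)
keep = uncertainty < uncertainty.quantile(0.8)          # reject 20% most uncertain
predictions = mean_probs[keep].argmax(-1)
```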
In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck. These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays. Perhaps surprisingly, despite the surge of interest in large-scale, multi-agent reinforcement learning, almost nothing is known about the analogous question: Are common reinforcement learning (RL) algorithms also robust to similar perturbations? In this paper, we investigate this question by studying a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction, where a general compression operator is used to model the perturbation. Our main technical contribution is to show that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic theoretical guarantees as their SGD counterparts. We then extend our results significantly to nonlinear stochastic approximation algorithms and multi-agent settings. In particular, we prove that for multi-agent TD learning, one can achieve linear convergence speedups in the number of agents while communicating just $\tilde{O}(1)$ bits per agent at each time step. Our work is the first to provide finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling. Our analysis hinges on studying the drift of a novel Lyapunov function that captures the dynamics of a memory variable introduced by error feedback.
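To make the mechanism concrete, the following toy sketch runs TD(0) with linear function approximation where the update direction is compressed with a top-$k$ operator and an error-feedback memory re-injects what compression discarded. The synthetic "environment", step sizes, and feature dimensions are invented; this is not the paper's experimental setup or its theoretical setting.

```python
# Toy compressed TD(0) with error feedback (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d, gamma, alpha, k = 8, 0.95, 0.05, 2

def top_k(v, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

theta = np.zeros(d)                        # linear value-function weights
e_mem = np.zeros(d)                        # error-feedback accumulator
phi = rng.standard_normal(d)               # features of the current state

for t in range(5000):
    phi_next = rng.standard_normal(d)      # fake next-state features
    reward = 0.1 * phi.sum() + rng.normal(0, 0.01)
    td_error = reward + gamma * (theta @ phi_next) - theta @ phi
    g = td_error * phi                     # uncompressed TD update direction
    compressed = top_k(g + e_mem, k)       # compress direction plus carried error
    e_mem = (g + e_mem) - compressed       # remember what was dropped
    theta += alpha * compressed
    phi = phi_next
```

The multi-agent result in the paper concerns communicating such compressed directions, at roughly $\tilde{O}(1)$ bits per agent per step, while still obtaining linear speedups in the number of agents.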
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Research on automated essay scoring has become increasingly important because it serves as a method for evaluating students' written responses at scale. Scalable methods for scoring written responses are needed as students migrate to online learning environments, resulting in the need to evaluate large numbers of written-response assessments. The purpose of this study is to describe and evaluate three active learning methods that can be used to minimize the number of essays that must be scored by human raters while still providing the data needed to train a modern automated essay scoring system. The three active learning methods are the uncertainty-based, the topological-based, and the hybrid method. These three methods were used to select essays included as part of the Automated Student Assessment Prize competition, which were then classified using a scoring model trained with the Bidirectional Encoder Representations from Transformers (BERT) language model. All three active learning methods produced strong results, with the topological-based method producing the most efficient classification. Growth rate accuracy was also evaluated. The active learning methods produced different levels of efficiency under different sample size allocations but, overall, all three methods were highly efficient and produced classifications that were similar to one another.
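The sketch below shows the general shape of the uncertainty-based acquisition step: score unlabeled essays by predictive entropy and send the most uncertain ones to human raters. The probability matrix and budget are synthetic, and the placeholder scorer stands in for the BERT-based system used in the study.

```python
# Uncertainty-based active learning selection (illustrative sketch).
import numpy as np

def predictive_entropy(probs):
    """probs: (n_essays, n_score_levels) class probabilities from the scorer."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_rating(probs, budget):
    """Return indices of the `budget` most uncertain essays."""
    scores = predictive_entropy(probs)
    return np.argsort(-scores)[:budget]

rng = np.random.default_rng(1)
unlabeled_probs = rng.dirichlet(np.ones(4), size=500)   # fake 4-level rubric
to_label = select_for_rating(unlabeled_probs, budget=50)
print(to_label[:10])   # essay indices to route to human raters
```

The topological-based method described in the study instead selects essays by how they cover the representation space; the hybrid method combines both criteria.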
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
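For readers unfamiliar with directional connectivity scoring, the sketch below computes an ordinary (not phase-based) pairwise Granger-causality test between two synthetic channels using statsmodels; it only illustrates the directionality idea, not the phase Granger causality method proposed here, and the channel names and signals are invented.

```python
# Pairwise Granger-causality scoring between two synthetic "EEG channels".
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 2000
source = rng.standard_normal(n)
sink = 0.6 * np.roll(source, 3) + 0.4 * rng.standard_normal(n)  # lagged copy

# Column order convention: test whether the 2nd column Granger-causes the 1st.
data = np.column_stack([sink, source])
results = grangercausalitytests(data, maxlag=5, verbose=False)
p_values = {lag: round(res[0]["ssr_ftest"][1], 4) for lag, res in results.items()}
print(p_values)   # small p-values suggest source -> sink directional influence
```

Aggregating such scores over all channel pairs, separately by outgoing (source) and incoming (sink) direction, yields the source, sink, and total scenarios analyzed above.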